AI Voice Scams Are Surging — Here’s How to Protect Yourself
AI voice-cloning scams pose a growing threat: Starling Bank warns that millions could fall victim to fraudsters using artificial intelligence to replicate voices and deceive people into sending money.

  • The UK-based online bank reports that scammers can clone a person’s voice from just three seconds of audio found online, such as in social media videos.
  • Fraudsters then use the cloned voice to impersonate the victim and contact their friends or family members, asking for money under false pretenses.

Survey reveals alarming trends: A recent study conducted by Starling Bank and Mortar Research highlights the prevalence and potential impact of AI voice-cloning scams.

  • Over a quarter of respondents reported being targeted by such scams in the past year.
  • 46% of those surveyed were unaware that these scams existed.
  • 8% of respondents admitted they would send money if requested by a friend or family member, even if the call seemed suspicious.

Cybersecurity expert sounds alarm: Lisa Grahame, chief information security officer at Starling Bank, emphasizes the need for increased awareness and caution.

  • Grahame points out that people often post content online containing their voice without realizing it could make them vulnerable to fraudsters.
  • The bank recommends establishing a “safe phrase” with loved ones to verify identity during phone calls.

Safeguarding against voice-cloning scams: Starling Bank offers advice on how to protect oneself from these sophisticated frauds.

  • The recommended “safe phrase” should be simple, random, and easy to remember, but different from other passwords.
  • Sharing the safe phrase via text is discouraged, but if necessary, the message should be deleted once received.
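The qualities Starling Bank describes — simple, random, and memorable — are the same ones behind random-word passphrases. As an illustrative sketch (the word list and function name here are invented for the example, not part of the bank's guidance), a family could generate a candidate safe phrase like this:

```python
import secrets

# Illustrative word list only; in practice, use everyday words
# your family will actually remember.
WORDS = [
    "maple", "otter", "sunset", "pebble", "harbor", "violet",
    "ember", "meadow", "falcon", "willow", "cedar", "lantern",
]

def make_safe_phrase(num_words: int = 3) -> str:
    """Pick a few random words to form a memorable safe phrase."""
    # secrets.choice uses a cryptographically strong random source,
    # unlike random.choice, so the phrase is hard to guess.
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

print(make_safe_phrase())
```

The point of randomness is that a scammer who knows the family cannot guess the phrase; the point of everyday words is that, per the bank's advice, it stays easy to remember and distinct from any password.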

AI advancements raise concerns: The increasing sophistication of AI in mimicking human voices has sparked worries about potential misuse.

  • There are growing fears about AI’s ability to help criminals access bank accounts and spread misinformation.
  • OpenAI, the creator of ChatGPT, has developed a voice replication tool called Voice Engine but has not made it publicly available due to concerns about synthetic voice misuse.

Broader implications for AI security: The rise of AI voice-cloning scams underscores the need for enhanced cybersecurity measures and public awareness.

  • As AI technology continues to advance, it’s likely that new forms of fraud and deception will emerge, requiring ongoing vigilance from both individuals and institutions.
  • The situation highlights the importance of responsible AI development and deployment, balancing innovation with safeguards against potential misuse.
